Spearman's Rank Correlation Coefficient

In statistics, Spearman's rank correlation coefficient or Spearman's ''ρ'', named after Charles Spearman and often denoted by the Greek letter \rho (rho) or as r_s, is a nonparametric measure of rank correlation (statistical dependence between the rankings of two variables). It assesses how well the relationship between two variables can be described using a monotonic function. The Spearman correlation between two variables is equal to the Pearson correlation between the rank values of those two variables; while Pearson's correlation assesses linear relationships, Spearman's correlation assesses monotonic relationships (whether linear or not). If there are no repeated data values, a perfect Spearman correlation of +1 or −1 occurs when each of the variables is a perfect monotone function of the other.

Intuitively, the Spearman correlation between two variables will be high when observations have a similar (or identical for a correlation of 1) rank (i.e. relative position label of the observations within the variable: 1st, 2nd, 3rd, etc.) between the two variables, and low when observations have a dissimilar (or fully opposed for a correlation of −1) rank between the two variables.

Spearman's coefficient is appropriate for both continuous and discrete ordinal variables. Both Spearman's \rho and Kendall's \tau can be formulated as special cases of a more general correlation coefficient.


Definition and calculation

The Spearman correlation coefficient is defined as the Pearson correlation coefficient between the rank variables. For a sample of size ''n'', the ''n'' raw scores X_i, Y_i are converted to ranks \operatorname{R}(X_i), \operatorname{R}(Y_i), and r_s is computed as

: r_s = \rho_{\operatorname{R}(X),\operatorname{R}(Y)} = \frac{\operatorname{cov}(\operatorname{R}(X), \operatorname{R}(Y))}{\sigma_{\operatorname{R}(X)}\, \sigma_{\operatorname{R}(Y)}},

where

: \rho denotes the usual Pearson correlation coefficient, but applied to the rank variables,
: \operatorname{cov}(\operatorname{R}(X), \operatorname{R}(Y)) is the covariance of the rank variables,
: \sigma_{\operatorname{R}(X)} and \sigma_{\operatorname{R}(Y)} are the standard deviations of the rank variables.

Only if all ''n'' ranks are ''distinct integers'' can it be computed using the popular formula

: r_s = 1 - \frac{6 \sum d_i^2}{n(n^2 - 1)},

where

: d_i = \operatorname{R}(X_i) - \operatorname{R}(Y_i) is the difference between the two ranks of each observation,
: ''n'' is the number of observations.

Consider a bivariate sample (x_i, y_i),\, i = 1, \dots, n with corresponding ranks (\operatorname{R}(X_i), \operatorname{R}(Y_i)) = (R_i, S_i). Then the Spearman correlation coefficient of x, y is

: r_s = \frac{\frac{1}{n} \sum_{i=1}^{n} R_i S_i - \overline{R}\,\overline{S}}{\sigma_R \sigma_S},

where, as usual, \overline{R} = \frac{1}{n} \sum_{i=1}^{n} R_i, \overline{S} = \frac{1}{n} \sum_{i=1}^{n} S_i, \sigma_R^2 = \frac{1}{n} \sum_{i=1}^{n} (R_i - \overline{R})^2, and \sigma_S^2 = \frac{1}{n} \sum_{i=1}^{n} (S_i - \overline{S})^2.

We shall show that r_s can be expressed purely in terms of d_i := R_i - S_i, provided we assume that there are no ties within each sample. Under this assumption, R and S can be viewed as random variables distributed like a uniformly distributed random variable U on \{1, 2, \dots, n\}. Hence \overline{R} = \overline{S} = \mathbb{E}[U] and \sigma_R^2 = \sigma_S^2 = \operatorname{Var}(U) = \mathbb{E}[U^2] - \mathbb{E}[U]^2, where

: \mathbb{E}[U] = \frac{1}{n} \sum_{i=1}^{n} i = \frac{n+1}{2}, \qquad \mathbb{E}[U^2] = \frac{1}{n} \sum_{i=1}^{n} i^2 = \frac{(n+1)(2n+1)}{6},

and thus

: \operatorname{Var}(U) = \frac{(n+1)(2n+1)}{6} - \left(\frac{n+1}{2}\right)^2 = \frac{n^2 - 1}{12}.

(These sums can be computed using the formulas for the triangular number and square pyramidal number, or basic summation results from discrete mathematics.) Observe now that, since R_i S_i = \frac{1}{2}(R_i^2 + S_i^2 - d_i^2),

: \begin{align} \frac{1}{n} \sum_{i=1}^{n} R_i S_i - \overline{R}\,\overline{S} &= \frac{1}{n} \sum_{i=1}^{n} \frac{1}{2}\left(R_i^2 + S_i^2 - d_i^2\right) - \overline{R}^2 \\ &= \frac{1}{2}\,\frac{1}{n} \sum_{i=1}^{n} R_i^2 + \frac{1}{2}\,\frac{1}{n} \sum_{i=1}^{n} S_i^2 - \frac{1}{2n} \sum_{i=1}^{n} d_i^2 - \overline{R}^2 \\ &= \left(\frac{1}{n} \sum_{i=1}^{n} R_i^2 - \overline{R}^2\right) - \frac{1}{2n} \sum_{i=1}^{n} d_i^2 \\ &= \sigma_R^2 - \frac{1}{2n} \sum_{i=1}^{n} d_i^2 \\ &= \sigma_R \sigma_S - \frac{1}{2n} \sum_{i=1}^{n} d_i^2. \end{align}

Putting this all together thus yields

: r_s = \frac{\sigma_R \sigma_S - \frac{1}{2n} \sum_{i=1}^{n} d_i^2}{\sigma_R \sigma_S} = 1 - \frac{\sum_{i=1}^{n} d_i^2}{2n \sigma_R \sigma_S} = 1 - \frac{6 \sum_{i=1}^{n} d_i^2}{n(n^2 - 1)}.

Identical values are usually each assigned fractional ranks equal to the average of their positions in the ascending order of the values, which is equivalent to averaging over all possible permutations. If ties are present in the data set, the simplified formula above yields incorrect results: only if in both variables all ranks are distinct does \sigma_{\operatorname{R}(X)}\, \sigma_{\operatorname{R}(Y)} = \operatorname{Var}(\operatorname{R}(X)) = \operatorname{Var}(\operatorname{R}(Y)) = (n^2 - 1)/12 hold (calculated according to the biased variance). The first equation (normalizing by the standard deviations) may be used even when ranks are normalized to [0, 1] ("relative ranks"), because it is insensitive both to translation and to linear scaling. The simplified method should also not be used in cases where the data set is truncated; that is, when the Spearman correlation coefficient is desired for the top ''X'' records (whether by pre-change rank or post-change rank, or both), the user should use the Pearson correlation coefficient formula given above.
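To make the two equivalent computations concrete, here is a minimal Python sketch (the array values are illustrative, not from the article) that computes r_s both as the Pearson correlation of the ranks and, because the illustrative values have no ties, via the simplified d_i^2 formula; scipy.stats.spearmanr is used only as a cross-check.

```python
import numpy as np
from scipy import stats

# Illustrative data with no tied values (assumption: ties-free, so the
# simplified formula is valid).
x = np.array([3.1, 1.2, 5.4, 2.0, 4.7])
y = np.array([9.0, 2.5, 30.0, 12.1, 3.3])

rx = stats.rankdata(x)          # ranks 1..n of x
ry = stats.rankdata(y)          # ranks 1..n of y

# Definition: Pearson correlation applied to the rank variables.
r_pearson_of_ranks = np.corrcoef(rx, ry)[0, 1]

# Simplified formula, valid only because all ranks are distinct.
d = rx - ry
n = len(x)
r_simplified = 1 - 6 * np.sum(d**2) / (n * (n**2 - 1))

# Cross-check against SciPy's implementation.
r_scipy, _ = stats.spearmanr(x, y)

print(r_pearson_of_ranks, r_simplified, r_scipy)  # all three agree (0.6)
```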


Related quantities

There are several other numerical measures that quantify the extent of statistical dependence between pairs of observations. The most common of these is the Pearson product-moment correlation coefficient, which is a similar correlation method to Spearman's rank that measures the “linear” relationships between the raw numbers rather than between their ranks.

An alternative name for the Spearman rank correlation is the “grade correlation”; in this, the “rank” of an observation is replaced by the “grade”. In continuous distributions, the grade of an observation is, by convention, always one half less than the rank, and hence the grade and rank correlations are the same in this case. More generally, the “grade” of an observation is proportional to an estimate of the fraction of a population less than a given value, with the half-observation adjustment at observed values. Thus this corresponds to one possible treatment of tied ranks. While unusual, the term “grade correlation” is still in use.


Interpretation

The sign of the Spearman correlation indicates the direction of association between ''X'' (the independent variable) and ''Y'' (the dependent variable). If ''Y'' tends to increase when ''X'' increases, the Spearman correlation coefficient is positive. If ''Y'' tends to decrease when ''X'' increases, the Spearman correlation coefficient is negative. A Spearman correlation of zero indicates that there is no tendency for ''Y'' to either increase or decrease when ''X'' increases. The Spearman correlation increases in magnitude as ''X'' and ''Y'' become closer to being perfectly monotone functions of each other. When ''X'' and ''Y'' are perfectly monotonically related, the Spearman correlation coefficient becomes 1. A perfectly monotone increasing relationship implies that for any two pairs of data values (X_i, Y_i) and (X_j, Y_j), the differences X_i - X_j and Y_i - Y_j always have the same sign. A perfectly monotone decreasing relationship implies that these differences always have opposite signs.

The Spearman correlation coefficient is often described as being "nonparametric". This can have two meanings. First, a perfect Spearman correlation results when ''X'' and ''Y'' are related by any monotonic function. Contrast this with the Pearson correlation, which only gives a perfect value when ''X'' and ''Y'' are related by a ''linear'' function. The other sense in which the Spearman correlation is nonparametric is that its exact sampling distribution can be obtained without requiring knowledge (i.e., knowing the parameters) of the joint probability distribution of ''X'' and ''Y''.
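As an illustration of the first sense, the following Python sketch (data and variable names are illustrative) applies a nonlinear but strictly increasing transformation to one variable: the Spearman correlation stays at exactly 1, while the Pearson correlation drops below 1.

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.exp(x)                      # strictly increasing, but nonlinear, in x

rho, _ = stats.spearmanr(x, y)     # exactly 1: ranks of y equal ranks of x
r, _ = stats.pearsonr(x, y)        # less than 1: the relationship is not linear

print(rho, r)
```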


Example

In this example, arbitrary raw data pairing the IQ of a person with the number of hours spent in front of TV per week (fictitious values) are used to calculate the correlation between the two variables.

Firstly, evaluate d_i^2. To do so, arrange the paired data in a table with columns X_i and Y_i and use the following steps.
# Sort the data by the first column (X_i). Create a new column x_i and assign it the ranked values 1, 2, 3, ..., ''n''.
# Next, sort the data by the second column (Y_i). Create a fourth column y_i and similarly assign it the ranked values 1, 2, 3, ..., ''n''.
# Create a fifth column d_i to hold the differences between the two rank columns (x_i and y_i).
# Create one final column d_i^2 to hold the value of column d_i squared.

With the d_i^2 found, add them to find \sum d_i^2 = 194. The value of ''n'' is 10. These values can now be substituted back into the equation

: \rho = 1 - \frac{6 \sum d_i^2}{n(n^2 - 1)}

to give

: \rho = 1 - \frac{6 \times 194}{10(10^2 - 1)} = -\frac{29}{165} \approx -0.18,

with a ''p''-value = 0.627188 (using the ''t''-distribution). That the value is close to zero shows that the correlation between IQ and hours spent watching TV is very low, although the negative value suggests that the longer the time spent watching television the lower the IQ. In the case of ties in the original values, this formula should not be used; instead, the Pearson correlation coefficient should be calculated on the ranks (where ties are given fractional ranks, as described above).
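A minimal Python sketch of the same computation; the IQ and TV-hours values below are illustrative stand-ins, chosen so that n = 10 and \sum d_i^2 = 194, matching the figures quoted above.

```python
import numpy as np
from scipy import stats

# Illustrative (fictitious) data: IQ paired with hours of TV per week,
# chosen so that n = 10 and the sum of squared rank differences is 194.
iq = np.array([106, 100, 86, 101, 99, 103, 97, 113, 112, 110])
tv_hours = np.array([7, 27, 2, 50, 28, 29, 20, 12, 6, 17])

rank_iq = stats.rankdata(iq)
rank_tv = stats.rankdata(tv_hours)

d = rank_iq - rank_tv
n = len(iq)
sum_d2 = np.sum(d**2)                       # 194.0
rho = 1 - 6 * sum_d2 / (n * (n**2 - 1))     # -29/165, about -0.18

# Two-sided p-value from the t approximation with n - 2 degrees of freedom.
t = rho * np.sqrt((n - 2) / (1 - rho**2))
p = 2 * stats.t.sf(abs(t), df=n - 2)        # about 0.627

print(sum_d2, rho, p)
```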


Confidence intervals

Confidence intervals for Spearman's ''ρ'' can be easily obtained using the jackknife Euclidean likelihood approach in de Carvalho and Marques (2012). The confidence interval with level \alpha is based on a Wilks' theorem given in the latter paper, and is given by

: \left\{ \theta : \frac{n\,(\overline{Z} - \theta)^2}{\frac{1}{n} \sum_{i=1}^{n} (Z_i - \overline{Z})^2} \le \chi^2_{1,\alpha} \right\},

where \chi^2_{1,\alpha} is the \alpha quantile of a chi-square distribution with one degree of freedom, and the Z_i are jackknife pseudo-values. This approach is implemented in the R package spearmanCI.
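The region above can be inverted in closed form, giving an interval centred at the mean pseudo-value. The following Python sketch is a rough illustration of the jackknife pseudo-value idea under that closed-form inversion, not a reimplementation of the spearmanCI package; the function name and data are invented for the example.

```python
import numpy as np
from scipy import stats

def spearman_jackknife_ci(x, y, level=0.95):
    """Sketch: jackknife pseudo-value confidence interval for Spearman's rho,
    obtained by inverting the chi-square condition stated above."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    rho_full, _ = stats.spearmanr(x, y)

    # Leave-one-out estimates and jackknife pseudo-values
    # Z_i = n * rho_hat - (n - 1) * rho_hat_(-i).
    loo = np.array([
        stats.spearmanr(np.delete(x, i), np.delete(y, i))[0] for i in range(n)
    ])
    z = n * rho_full - (n - 1) * loo

    z_bar = z.mean()
    s2 = np.mean((z - z_bar) ** 2)          # biased (1/n) variance, as in the region above
    chi2_q = stats.chi2.ppf(level, df=1)    # alpha quantile, one degree of freedom

    half_width = np.sqrt(chi2_q * s2 / n)
    return z_bar - half_width, z_bar + half_width

# Example usage with illustrative data:
rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = x + rng.normal(size=50)
print(spearman_jackknife_ci(x, y))
```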


Determining significance

One approach to test whether an observed value of ''ρ'' is significantly different from zero (''r'' will always maintain -1 \le r \le 1) is to calculate the probability that it would be greater than or equal to the observed ''r'', given the null hypothesis, by using a permutation test. An advantage of this approach is that it automatically takes into account the number of tied data values in the sample and the way they are treated in computing the rank correlation.

Another approach parallels the use of the Fisher transformation in the case of the Pearson product-moment correlation coefficient. That is, confidence intervals and hypothesis tests relating to the population value ''ρ'' can be carried out using the Fisher transformation:

: F(r) = \frac{1}{2} \ln\frac{1+r}{1-r} = \operatorname{artanh} r.

If ''F''(''r'') is the Fisher transformation of ''r'', the sample Spearman rank correlation coefficient, and ''n'' is the sample size, then

: z = \sqrt{\frac{n-3}{1.06}}\, F(r)

is a ''z''-score for ''r'', which approximately follows a standard normal distribution under the null hypothesis of statistical independence (\rho = 0).

One can also test for significance using

: t = r \sqrt{\frac{n-2}{1-r^2}},

which is distributed approximately as Student's ''t''-distribution with n - 2 degrees of freedom under the null hypothesis. A justification for this result relies on a permutation argument.

A generalization of the Spearman coefficient is useful in the situation where there are three or more conditions, a number of subjects are all observed in each of them, and it is predicted that the observations will have a particular order. For example, a number of subjects might each be given three trials at the same task, and it is predicted that performance will improve from trial to trial. A test of the significance of the trend between conditions in this situation was developed by E. B. Page and is usually referred to as Page's trend test for ordered alternatives.
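A minimal Python sketch of two of the significance calculations described above (the permutation test and the ''t'' approximation); the data are illustrative and the number of permutations is an arbitrary choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=30)
y = 0.5 * x + rng.normal(size=30)
n = len(x)

r, _ = stats.spearmanr(x, y)

# Permutation test: shuffle one variable, recompute r_s, and compare.
n_perm = 10_000
perm_r = np.array([
    stats.spearmanr(x, rng.permutation(y))[0] for _ in range(n_perm)
])
p_perm = np.mean(np.abs(perm_r) >= abs(r))   # two-sided permutation p-value

# t approximation with n - 2 degrees of freedom.
t = r * np.sqrt((n - 2) / (1 - r**2))
p_t = 2 * stats.t.sf(abs(t), df=n - 2)

print(r, p_perm, p_t)
```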


Correspondence analysis based on Spearman's ''ρ''

Classic correspondence analysis is a statistical method that gives a score to every value of two nominal variables. In this way the Pearson correlation coefficient between them is maximized. There exists an equivalent of this method, called grade correspondence analysis, which maximizes Spearman's ''ρ'' or Kendall's τ.


Approximating Spearman's ''ρ'' from a stream

There are two existing approaches to approximating the Spearman's rank correlation coefficient from streaming data. The first approach involves coarsening the joint distribution of (X, Y). For continuous X, Y values, m_1 and m_2 cutpoints are selected for X and Y respectively, discretizing these random variables. Default cutpoints are added at -\infty and \infty. A count matrix of size (m_1 + 1) \times (m_2 + 1), denoted M, is then constructed, where M[i, j] stores the number of observations that fall into the two-dimensional cell indexed by (i, j). For streaming data, when a new observation arrives, the appropriate M[i, j] element is incremented. The Spearman's rank correlation can then be computed, based on the count matrix M, using linear algebra operations (Algorithm 2). Note that for discrete random variables, no discretization procedure is necessary. This method is applicable to stationary streaming data as well as large data sets. For non-stationary streaming data, where the Spearman's rank correlation coefficient may change over time, the same procedure can be applied, but to a moving window of observations. When using a moving window, memory requirements grow linearly with the chosen window size.

The second approach to approximating the Spearman's rank correlation coefficient from streaming data involves the use of Hermite series based estimators. These estimators, based on Hermite polynomials, allow sequential estimation of the probability density function and cumulative distribution function in univariate and bivariate cases. Bivariate Hermite series density estimators and univariate Hermite series based cumulative distribution function estimators are plugged into a large sample version of the Spearman's rank correlation coefficient estimator, to give a sequential Spearman's correlation estimator. This estimator is phrased in terms of linear algebra operations for computational efficiency (equation (8) and algorithms 1 and 2). These algorithms are only applicable to continuous random variable data, but have certain advantages over the count matrix approach in this setting. The first advantage is improved accuracy when applied to large numbers of observations. The second advantage is that the Spearman's rank correlation coefficient can be computed on non-stationary streams without relying on a moving window. Instead, the Hermite series based estimator uses an exponential weighting scheme to track time-varying Spearman's rank correlation from streaming data, which has constant memory requirements with respect to the "effective" moving window size.
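The following Python sketch illustrates only the count-matrix idea of the first approach. It is a rough illustration under stated assumptions (fixed, user-chosen cutpoints; mid-ranks derived from the marginal bin counts; a count-weighted Pearson correlation of those mid-ranks), not a reproduction of the referenced Algorithm 2.

```python
import numpy as np

class StreamingSpearman:
    """Sketch of the count-matrix approach: observations are binned against
    fixed cutpoints and Spearman's rho is estimated from the counts using
    mid-ranks (illustrative; not the referenced Algorithm 2)."""

    def __init__(self, cutpoints_x, cutpoints_y):
        self.cx = np.asarray(cutpoints_x)        # finite cutpoints for X
        self.cy = np.asarray(cutpoints_y)        # finite cutpoints for Y
        # (m1 + 1) x (m2 + 1) count matrix; the -inf/+inf bins are implicit.
        self.M = np.zeros((len(self.cx) + 1, len(self.cy) + 1))

    def update(self, x, y):
        i = np.searchsorted(self.cx, x)          # bin index for x
        j = np.searchsorted(self.cy, y)          # bin index for y
        self.M[i, j] += 1

    def estimate(self):
        M = self.M
        n = M.sum()
        row, col = M.sum(axis=1), M.sum(axis=0)  # marginal bin counts
        # Mid-rank of each bin: ranks used by earlier bins, plus half the bin.
        rank_x = np.cumsum(row) - row + (row + 1) / 2
        rank_y = np.cumsum(col) - col + (col + 1) / 2
        mx = np.sum(row * rank_x) / n
        my = np.sum(col * rank_y) / n
        cov = np.sum(M * np.outer(rank_x - mx, rank_y - my)) / n
        vx = np.sum(row * (rank_x - mx) ** 2) / n
        vy = np.sum(col * (rank_y - my) ** 2) / n
        return cov / np.sqrt(vx * vy)

# Example usage with illustrative cutpoints and a synthetic stream:
rng = np.random.default_rng(2)
sc = StreamingSpearman(np.linspace(-2, 2, 20), np.linspace(-2, 2, 20))
for _ in range(10_000):
    x = rng.normal()
    sc.update(x, x + rng.normal())
print(sc.estimate())
```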


Software implementations

* R's statistics base-package implements the test cor.test(x, y, method = "spearman") in its "stats" package (also cor(x, y, method = "spearman") will work). The package spearmanCI computes confidence intervals.
* Stata implementation: spearman varlist calculates all pairwise correlation coefficients for all variables in varlist.
* MATLAB implementation: [r, p] = corr(x, y, 'Type', 'Spearman') where r is the Spearman's rank correlation coefficient, p is the ''p''-value, and x and y are vectors.
* Python has many different implementations of the Spearman correlation statistic: it can be computed with the spearmanr function of the scipy.stats module, as well as with the DataFrame.corr(method='spearman') method from the pandas library, and the corr(x, y, method='spearman') function from the statistical package pingouin.
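As a small usage illustration of the pandas call listed above (the column names and values are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "iq": [106, 100, 86, 101, 99],
    "tv_hours": [7, 27, 2, 50, 28],
})

# Spearman correlation matrix of all numeric columns.
print(df.corr(method="spearman"))
```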


See also

* Kendall tau rank correlation coefficient
* Chebyshev's sum inequality, rearrangement inequality (These two articles may shed light on the mathematical properties of Spearman's ''ρ''.)
* Distance correlation
* Polychoric correlation


References


Further reading

* Corder, G. W. & Foreman, D. I. (2014). ''Nonparametric Statistics: A Step-by-Step Approach''. Wiley.


External links


* Table of critical values of ''ρ'' for significance with small samples
* Spearman’s Rank Correlation Coefficient – Excel Guide: sample data and formulae for Excel, developed by the Royal Geographical Society.